
General Data Protection Regulation


Getting Ready for the EU AI Act in Healthcare. A call for Sustainable AI Development and Deployment

Brodersen, John Brandt, Caggiano, Ilaria Amelia, Kringen, Pedro, Madai, Vince Istvan, Osika, Walter, Sartor, Giovanni, Svensson, Ellen, Westerlund, Magnus, Zicari, Roberto V.

arXiv.org Artificial Intelligence

Assessments of trustworthiness have become a cornerstone of responsible AI development. Especially in high-stakes fields like healthcare, aligning technical, evidence-based, and ethical practices with forthcoming legal requirements is increasingly urgent. We argue that developers and deployers of AI systems for the medical domain should be proactive and take steps to progressively ensure that such systems, both those currently in use and those being developed or planned, respect the requirements of the AI Act, which came into force in August 2024. This is necessary to ensure full and effective compliance by the time the most relevant provisions of the Act take effect in August 2026. Engagement with the AI Act cannot be treated as a formalistic exercise: compliance needs to be carried out through a proactive commitment to the ethical principles of trustworthy AI. These principles provide the background for the Act, which mentions them several times and connects them to the protection of the public interest. They can be used to interpret and apply the Act's provisions and to identify good practices, increasing the validity and sustainability of AI systems over time.


Privacy-Preserving Customer Support: A Framework for Secure and Scalable Interactions

Awasthi, Anant Prakash, Agarwal, Girdhar Gopal, Singh, Chandraketu, Varma, Rakshit, Sharma, Sanchit

arXiv.org Machine Learning

The growing reliance on artificial intelligence (AI) in customer support has significantly improved operational efficiency and user experience. However, traditional machine learning (ML) approaches, which require extensive local training on sensitive datasets, pose substantial privacy risks and compliance challenges with regulations like the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA). Existing privacy-preserving techniques, such as anonymization, differential privacy, and federated learning, address some concerns but face limitations in utility, scalability, and complexity. This paper introduces the Privacy-Preserving Zero-Shot Learning (PP-ZSL) framework, a novel approach leveraging large language models (LLMs) in a zero-shot learning mode. Unlike conventional ML methods, PP-ZSL eliminates the need for local training on sensitive data by utilizing pre-trained LLMs to generate responses directly. The framework incorporates real-time data anonymization to redact or mask sensitive information, retrieval-augmented generation (RAG) for domain-specific query resolution, and robust post-processing to ensure compliance with regulatory standards. This combination reduces privacy risks, simplifies compliance, and enhances scalability and operational efficiency. Empirical analysis demonstrates that the PP-ZSL framework provides accurate, privacy-compliant responses while significantly lowering the costs and complexities of deploying AI-driven customer support systems. The study highlights potential applications across industries, including financial services, healthcare, e-commerce, legal support, telecommunications, and government services. By addressing the dual challenges of privacy and performance, this framework establishes a foundation for secure, efficient, and regulatory-compliant AI applications in customer interactions.
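The pipeline the abstract describes, real-time anonymization before the zero-shot LLM call, followed by post-processing, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the regex patterns, the placeholder scheme, and the `query_llm` stub are all hypothetical stand-ins (a production system would use a trained PII detector and a real LLM endpoint).

```python
import re

# Hypothetical PII patterns; a real deployment would use a trained NER/PII model.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def anonymize(text):
    """Replace PII spans with indexed placeholders; return redacted text and a lookup map."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder, 1)
    return text, mapping

def query_llm(prompt):
    # Stub for a zero-shot call to a pre-trained LLM: the model only ever
    # sees the redacted prompt, so no sensitive data leaves the boundary.
    return f"Reply drafted for: {prompt}"

def deanonymize(text, mapping):
    """Post-processing step: restore placeholders in the final user-facing reply."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text
```

The key property is that sensitive values exist only inside `anonymize`/`deanonymize` on the provider's side; the model itself is never trained on, or shown, raw customer data.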


A Comprehensive Guide to Explainable AI: From Classical Models to LLMs

Hsieh, Weiche, Bi, Ziqian, Jiang, Chuanqi, Liu, Junyu, Peng, Benji, Zhang, Sen, Pan, Xuanhe, Xu, Jiawei, Wang, Jinlang, Chen, Keyu, Feng, Pohsun, Wen, Yizhu, Song, Xinyuan, Wang, Tianyang, Liu, Ming, Yang, Junjie, Li, Ming, Jing, Bowen, Ren, Jintao, Song, Junhao, Tseng, Hong-Ming, Zhang, Yichao, Yan, Lawrence K. Q., Niu, Qian, Chen, Silin, Wang, Yunze, Liang, Chia Xin

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) addresses the growing need for transparency and interpretability in AI systems, enabling trust and accountability in decision-making processes. This book offers a comprehensive guide to XAI, bridging foundational concepts with advanced methodologies. It explores interpretability in traditional models such as Decision Trees, Linear Regression, and Support Vector Machines, alongside the challenges of explaining deep learning architectures like CNNs, RNNs, and Large Language Models (LLMs), including BERT, GPT, and T5. The book presents practical techniques such as SHAP, LIME, Grad-CAM, counterfactual explanations, and causal inference, supported by Python code examples for real-world applications. Case studies illustrate XAI's role in healthcare, finance, and policymaking, demonstrating its impact on fairness and decision support. The book also covers evaluation metrics for explanation quality, an overview of cutting-edge XAI tools and frameworks, and emerging research directions, such as interpretability in federated learning and ethical AI considerations. Designed for a broad audience, this resource equips readers with the theoretical insights and practical skills needed to master XAI. Hands-on examples and additional resources are available at the companion GitHub repository: https://github.com/Echoslayer/XAI_From_Classical_Models_to_LLMs.
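Among the techniques the book covers, counterfactual explanations are simple enough to sketch from first principles. The toy linear model and single-feature search below are illustrative assumptions, not the book's code: for a linear scorer, we can solve directly for the smallest change to one feature that flips the decision.

```python
def predict(x, weights, bias=0.0, threshold=0.0):
    """Toy linear classifier: returns 1 if the weighted sum exceeds the threshold."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return int(score > threshold)

def counterfactual(x, weights, bias=0.0, threshold=0.0):
    """For each feature, solve for the value that moves the score to the decision
    boundary, and return the single-feature edit with the smallest change."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    best = None
    for i, (w, xi) in enumerate(zip(weights, x)):
        if w == 0:
            continue  # this feature cannot move the score
        delta = (threshold - score) / w   # exact change landing on the boundary
        candidate = xi + delta * 1.001    # nudge slightly past it to flip the label
        if best is None or abs(candidate - xi) < abs(best[1] - x[best[0]]):
            best = (i, candidate)
    return best  # (feature index, new feature value)
```

For example, with `x = [2.0, 1.0]` and `weights = [1.0, -1.0]` the prediction is 1, and the returned counterfactual lowers the first feature just enough to flip it to 0; libraries like SHAP or DiCE generalize this idea to non-linear models.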


GiusBERTo: A Legal Language Model for Personal Data De-identification in Italian Court of Auditors Decisions

Salierno, Giulio, Bertè, Rosamaria, Attias, Luca, Morrone, Carla, Pettazzoni, Dario, Battisti, Daniela

arXiv.org Artificial Intelligence

Recent advances in Natural Language Processing have demonstrated the effectiveness of pretrained language models like BERT for a variety of downstream tasks. We present GiusBERTo, the first BERT-based model specialized for anonymizing personal data in Italian legal documents. GiusBERTo is trained on a large dataset of Court of Auditors decisions to recognize entities to anonymize, including names, dates, and locations, while retaining contextual relevance. We evaluate GiusBERTo on a held-out test set and achieve 97% token-level accuracy. GiusBERTo provides the Italian legal community with an accurate and tailored BERT model for de-identification, balancing privacy and data protection.
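The token-level de-identification task GiusBERTo performs can be illustrated with a toy post-processing step: given per-token BIO tags from any token-classification model, collapse each tagged span into a category placeholder. The tag names, placeholder format, and example sentence below are assumptions for illustration, not GiusBERTo's actual label set or output format.

```python
def deidentify(tokens, tags):
    """Collapse BIO-tagged entity spans into placeholders like [NAME].

    tokens: list of word tokens; tags: parallel list of BIO labels
    ("B-NAME", "I-NAME", "O", ...), as a token classifier would emit.
    """
    out, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            current = tag[2:]
            out.append(f"[{current}]")        # start of a span: emit one placeholder
        elif tag.startswith("I-") and current == tag[2:]:
            continue                          # continuation of the span already replaced
        else:
            current = None
            out.append(token)                 # untagged token passes through
    return out
```

Applied to a (hypothetical) tagged fragment such as `["Il", "ricorrente", "Mario", "Rossi", ",", "nato", "a", "Roma"]` with tags `["O", "O", "B-NAME", "I-NAME", "O", "O", "O", "B-LOC"]`, this yields `["Il", "ricorrente", "[NAME]", ",", "nato", "a", "[LOC]"]`, preserving the sentence's structure while removing identifying values.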


The EU's AI Act: Is it unfair to insurers?

#artificialintelligence

The regulation's scope encompasses all sectors (except for military) and aims to introduce a common regulatory and legal framework for AI, ensuring that all AI systems are safe and respect existing law on fundamental rights and values. Personally, I think AI regulation and governance is very important. We've all seen the sci-fi movies where artificially intelligent robots (sorry, beings) take over the world and attempt to bring about the end of humanity as we know it, until some bruised and battered hero saves the day. While that's the worst-case scenario meant only for our screens, there are some real use-cases for AI that are actually quite scary. Think about deepfakes, for example, where AI is used to forge an image, video, or audio recording with such precision that the average human is unlikely to detect any manipulation.


Hitting the Books: How can privacy survive in a world that never forgets?

Engadget

As I write this, Amazon is announcing its purchase of iRobot, adding its room-mapping robotic vacuum technology to the company's existing home surveillance suite, the Ring doorbell and prototype aerial drone. This is in addition to Amazon already knowing what you order online, what websites you visit, what foods you eat and, soon, every last scrap of personal medical data you possess. The trend of our gadgets and infrastructure constantly, often invasively, monitoring their users shows little sign of slowing -- not when there's so much money to be made. Of course it hasn't been all bad for humanity, what with AI's help in advancing medical, communications and logistics tech in recent years. In his new book, Machines Behaving Badly: The Morality of AI, Scientia Professor of Artificial Intelligence at the University of New South Wales, Dr. Toby Walsh, explores the duality of potential that artificial intelligence/machine learning systems offer and, in the excerpt below, how to claw back a bit of your privacy from an industry built for omniscience. Published by La Trobe University Press. The Second Law of Thermodynamics states that the total entropy of a system – the amount of disorder – only ever increases.


The AI Act: Three Things To Know About AI Regulation Worldwide - AI Summary

#artificialintelligence

In 2018, the European Union introduced the General Data Protection Regulation (GDPR), which has clauses that impact AI – notably text indicating a "right to explanation" – an area that affects AI algorithms and has been the subject of much debate since its introduction. Elsewhere, local regulations have been attempted, ranging from bans on the use of certain types of AI (such as facial recognition) to committees that examine the fairness of algorithms used in resource allocation. The exact criteria and specifics of the law are still being debated, with exceptions and loopholes having been identified by a number of institutions. There are many regulations in development, and to make things even more complicated, they differ in their geographical or industry scope and in their targets. Developing a cohesive compliance practice makes it easier to treat these regulations as connected requirements that can be addressed together.


The CPSC Digs In On Artificial Intelligence - AI Summary

#artificialintelligence

On March 2, 2021, at a virtual forum attended by stakeholders across the entire industry, the Consumer Product Safety Commission (CPSC) reminded us all that it has the last say on regulating AI and machine learning consumer product safety. The CPSC defines AI as "any method for programming computers or products to enable them to carry out tasks or behaviors that would require intelligence if performed by humans" and machine learning as "an iterative process of applying models or algorithms to data sets to learn and detect patterns and/or perform tasks, such as prediction or decision making that can approximate some aspects of intelligence." To inform the ongoing discussion on how to regulate AI, machine learning, and related technologies, the CPSC offers a list of considerations, beginning with: Do AI and machine learning affect consumer product safety? UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, covers "fully autonomous systems that move such as self-driving cars along with applications in mining, agriculture, maintenance, and other vehicles including lightweight unmanned aerial vehicles."


Nigel Hughes: A Non-Identifiable Data Layer On Top of Clinical Systems That Retain Memory May Be The Future of European Health Data - CIFS Health

#artificialintelligence

Nigel Hughes has a thirty-six-year career spanning the NHS in the UK (16 years), NGOs and patient organisations (10 years), and the pharmaceutical industry (18 years). He has worked clinically in HIV, viral hepatitis, and liver disease, as well as in sales & marketing, medical affairs, market access and health economics, R&D, precision medicine, advanced diagnostics, health IT, and Real World Data/Real World Medicine. His experience covers clinical work, education, advisory and consulting roles, communications, and lobbying over the years. He is currently the Project Lead for the IMI2 European Health Data & Evidence Network (EHDEN), and was Platform Co-Lead for the IMI1 European Medical Information Framework (EMIF), as well as consulting on numerous projects and programmes in the domain of RWD/RWE. In common with all regions and countries, our health data infrastructure reflects how we all organically developed systems layered over prior systems, apart perhaps from countries such as Estonia, which was afforded the opportunity to start afresh after the end of the Soviet Union.


Federated Learning: Collaborative Machine Learning with a Tutorial on How to Get Started - KDnuggets

#artificialintelligence

Federated learning, also known as collaborative learning, allows training models at scale on data that remains distributed on the devices where it is generated. Sensitive data stays with its owners, where training is conducted, and a centralized training orchestrator sees only each client's contribution through model updates. Federated learning doesn't guarantee privacy on its own (we'll touch on breaking and repairing privacy in federated learning systems later on), but it does make privacy possible. With the public and policy-makers becoming more aware of the data economy, demand for privacy-preserving machine learning is on the rise. As a result, data practices have been garnering increased scrutiny, and research on privacy-respecting tools like federated learning is increasingly active.
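The update-only exchange described above can be sketched as a single round of federated averaging: each client runs a local training step and sends back only its weights; the orchestrator averages them without ever seeing raw data. This is a schematic sketch under simplifying assumptions (plain lists standing in for model weights, a one-feature least-squares model), not a production implementation.

```python
def local_update(weights, data, lr=0.1):
    """One gradient step on data that never leaves the client.

    Toy model: y ≈ w * x, minimizing squared error. `data` is a list
    of (x, y) pairs held privately by this client.
    """
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def federated_round(global_weights, client_datasets):
    """Orchestrator side: broadcast global weights, collect updated weights,
    and average them. Raw client data is never transmitted, only weights."""
    updates = [local_update(global_weights, data) for data in client_datasets]
    return [sum(ws) / len(ws) for ws in zip(*updates)]
```

For instance, two clients holding `[(1.0, 2.0)]` and `[(1.0, 4.0)]` each pull the shared weight toward their own private data, and the server's average moves toward a compromise between them; repeating rounds converges toward a model fit to the union of data no single party ever held.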